# Additional notes.

## Further discussion of matrix representation with respect to different basis sets.

(With much internal struggle, I will make this part of the reading optional, but it is for your own good and greater insight.)

First, let me make a remark on notation. If we have a linear map $T:\mathbb R^{k} \to \mathbb R^{n}$, and we fix ordered bases $\beta$ for $\mathbb R^{k}$ and $\gamma$ for $\mathbb R^{n}$, then we have
$$
[T]_{\beta}^{\gamma} = P_{\gamma}^{-1} [T]_{\text{std}_{k}}^{\text{std}_{n}} P_{\beta},
$$
which, let me remind you, we obtain by tracing out the diagram
$$
\begin{array}{ccc}
\underset{(\text{std}_{k})}{\mathbb R^{k}} & \xrightarrow{[T]_{\text{std}_{k}}^{\text{std}_{n}}} & \underset{(\text{std}_{n})}{\mathbb R^{n}} \\
\color{blue} P_{\beta}\uparrow\ \ \ \ \ &\quad ⟲ \quad & \ \ \ \ \ \ \color{blue} \downarrow P_{\gamma}^{-1}\\
\underset{(\beta)}{\mathbb R^{k}} & \xrightarrow{[T]_{\beta}^{\gamma}} & \underset{(\gamma)}{\mathbb R^{n}}
\end{array}
$$

But there is something unsatisfactory about this. The notation $[T]_{\text{std}_{k}}^{\text{std}_{n}}$ stands for a matrix that multiplies a vector in $\text{std}_{k}$-coordinates and returns the output in $\text{std}_{n}$-coordinates, and $[T]_{\beta}^{\gamma}$ multiplies a vector in $\beta$-coordinates and returns the output in $\gamma$-coordinates. Can we put $P_{\beta}$ and $P_{\gamma}^{-1}$ into this same "bracket notation" as well? Yes.

First let us think about what $P_{\beta}$ is doing. It simply converts the representation of a vector from $\beta$-coordinates back into standard $\text{std}_{k}$-coordinates, that is,
$$
[\vec x]_{\text{std}_{k}} = P_{\beta}[\vec x]_{\beta},
$$
and since $\vec x = I_{k\times k}\vec x$, where $I_{k\times k}$ is the identity linear map on $\mathbb R^{k}$, we have
$$
[I_{k\times k}\vec x]_{\text{std}_{k}} = P_{\beta}[\vec x]_{\beta}.
$$
But if you review what the notation $[T]_{\beta}^{\gamma}$ means in general, we see that
$$
P_{\beta} = [I_{k\times k}]_{\beta}^{\text{std}_{k}}.
$$
So $P_{\beta}$ really is the matrix representation of the identity map that takes in $\beta$-coordinates and returns $\text{std}_{k}$-coordinates. Similarly, $P_{\gamma}^{-1}$ is such that $P_{\gamma}^{-1}\vec y=P_{\gamma}^{-1}[\vec y]_{\text{std}_{n}} = [\vec y]_{\gamma} = [I_{n\times n} \vec y]_{\gamma}$. So
$$
P_{\gamma}^{-1} = [I_{n\times n}]_{\text{std}_{n}}^{\gamma}.
$$
With this view, the diagram becomes
$$
\begin{array}{ccc}
\underset{(\text{std}_{k})}{\mathbb R^{k}} & \xrightarrow{[T]_{\text{std}_{k}}^{\text{std}_{n}}} & \underset{(\text{std}_{n})}{\mathbb R^{n}} \\
\color{blue} [I_{k\times k}]_{\beta}^{\text{std}_{k}}\uparrow\ \ \ \ \ &\quad ⟲ \quad & \ \ \ \ \ \ \color{blue} \downarrow [I_{n\times n}]_{\text{std}_{n}}^{\gamma}\\
\underset{(\beta)}{\mathbb R^{k}} & \xrightarrow{[T]_{\beta}^{\gamma}} & \underset{(\gamma)}{\mathbb R^{n}}
\end{array}
$$
and we have the relation
$$
[T]_{\beta}^{\gamma} = [I_{n\times n}]_{\text{std}_{n}}^{\gamma}\,[T]_{\text{std}_{k}}^{\text{std}_{n}}\,[I_{k\times k}]_{\beta}^{\text{std}_{k}}.
$$
This illustrates the beauty of the bracket notation: if we trace each subscript up to the matching superscript to its right, reading from right to left (function composition order), it all makes sense! Further, since $P_{\gamma}^{-1} = [I_{n\times n}]_{\text{std}_{n}}^{\gamma}$, we also have $P_{\gamma}=[I_{n\times n}]_{\gamma}^{\text{std}_{n}}$, since these are just change-of-basis matrices going in opposite directions. This is better notation, because $P_{\beta}$ is not descriptive enough (for example, a stranger wouldn't necessarily guess its meaning), while $[I_{k\times k}]_{\beta}^{\text{std}_{k}}$ tells me exactly what it is doing. You may continue using $P_{\beta}$, since we have established its meaning in our class.

Bottom line:

> If $\beta = (\vec b_{1},\ldots,\vec b_{k})$ is an ordered basis for $\mathbb R^{k}$, then
> $$
> [I_{k\times k}]_{\beta}^{\text{std}_{k}} = P_{\beta}
> $$
> and
> $$
> [I_{k\times k}]_{\text{std}_{k}}^{\beta} = P_{\beta}^{-1}.
> $$
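The identity-map reading of $P_{\beta}$ is easy to check numerically. Here is a minimal sketch in Python with NumPy, using a made-up ordered basis $\beta = ((1,1),(1,-1))$ for $\mathbb R^{2}$ purely for illustration: the columns of $P_{\beta}$ are the basis vectors, and multiplying by $P_{\beta}$ converts $\beta$-coordinates to standard coordinates, while $P_{\beta}^{-1}$ goes back.

```python
import numpy as np

# Hypothetical basis beta = ((1, 1), (1, -1)) for R^2.
# P_beta has the basis vectors as its columns, so P_beta = [I]_beta^std:
# it takes in beta-coordinates and returns standard coordinates.
P_beta = np.array([[1.0,  1.0],
                   [1.0, -1.0]])

x_beta = np.array([2.0, 3.0])    # [x]_beta
x_std = P_beta @ x_beta          # [x]_std = P_beta [x]_beta
# Indeed 2*(1,1) + 3*(1,-1) = (5, -1).
print(x_std)                     # [ 5. -1.]

# Going the other way, [I]_std^beta = P_beta^{-1}
# (solve is preferable to forming the inverse explicitly):
x_beta_again = np.linalg.solve(P_beta, x_std)
print(x_beta_again)              # [2. 3.]
```

Note that we never need $T$ here: both matrices represent the identity map, just read in different coordinate systems.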
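The full relation $[T]_{\beta}^{\gamma} = [I_{n\times n}]_{\text{std}_{n}}^{\gamma}\,[T]_{\text{std}_{k}}^{\text{std}_{n}}\,[I_{k\times k}]_{\beta}^{\text{std}_{k}}$ can also be traced through the diagram numerically. The sketch below uses made-up data with $k=2$, $n=3$: a standard matrix $A$ for $T$ and bases $\beta$, $\gamma$ chosen only for illustration. It computes $[T]_{\beta}^{\gamma} = P_{\gamma}^{-1} A P_{\beta}$ and checks, on one vector, that going straight across the bottom of the diagram agrees with going up, across the top, and back down.

```python
import numpy as np

# Hypothetical standard matrix [T]_std2^std3 of a map T : R^2 -> R^3.
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])

# Ordered bases as columns: beta for R^2, gamma for R^3.
P_beta = np.array([[1.0, 1.0],
                   [0.0, 1.0]])          # [I]_beta^std2
P_gamma = np.array([[1.0, 0.0, 0.0],
                    [1.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])    # [I]_gamma^std3

# [T]_beta^gamma = [I]_std3^gamma  [T]_std2^std3  [I]_beta^std2
T_bg = np.linalg.inv(P_gamma) @ A @ P_beta

# Trace one vector around the diagram both ways.
x_beta = np.array([4.0, -2.0])
bottom = T_bg @ x_beta                                  # directly, beta -> gamma
around = np.linalg.solve(P_gamma, A @ (P_beta @ x_beta))  # up, across, down
assert np.allclose(bottom, around)
print(T_bg)
```

Reading the subscripts and superscripts in the product from right to left (`P_beta`, then `A`, then `P_gamma`-inverse), each output coordinate system matches the next input coordinate system, exactly as the bracket notation promises.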